
    Convergence results for projected line-search methods on varieties of low-rank matrices via Łojasiewicz inequality

    The aim of this paper is to derive convergence results for projected line-search methods on the real-algebraic variety $\mathcal{M}_{\le k}$ of real $m \times n$ matrices of rank at most $k$. Such methods extend Riemannian optimization methods, which are successfully used on the smooth manifold $\mathcal{M}_k$ of rank-$k$ matrices, to its closure by taking steps along gradient-related directions in the tangent cone, and afterwards projecting back to $\mathcal{M}_{\le k}$. Considering such a method circumvents the difficulties which arise from the nonclosedness and the unbounded curvature of $\mathcal{M}_k$. Pointwise convergence is obtained for real-analytic functions on the basis of a Łojasiewicz inequality for the projection of the antigradient to the tangent cone. If the derived limit point lies on the smooth part of $\mathcal{M}_{\le k}$, i.e. in $\mathcal{M}_k$, this boils down to more or less known results, but with the benefit that asymptotic convergence rate estimates (for specific step-sizes) can be obtained without an a priori curvature bound, simply from the fact that the limit lies on a smooth manifold. At the same time, one can give a convincing justification for assuming critical points to lie in $\mathcal{M}_k$: if $X$ is a critical point of $f$ on $\mathcal{M}_{\le k}$, then either $X$ has rank $k$, or $\nabla f(X) = 0$.
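    The projection idea can be illustrated with a minimal sketch (not the authors' algorithm: it takes a plain ambient gradient step and then projects onto $\mathcal{M}_{\le k}$ by truncated SVD, rather than stepping along the tangent cone; all names are hypothetical):

```python
import numpy as np

def truncate_rank(X, k):
    """Project a matrix onto the variety of rank-<=k matrices
    via truncated SVD (best rank-k approximation, Eckart-Young)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

def projected_gradient(grad_f, X0, k, step=0.1, iters=200):
    """Illustrative projected iteration on M_{<=k}: step along the
    negative gradient, then project back to the variety."""
    X = truncate_rank(X0, k)
    for _ in range(iters):
        X = truncate_rank(X - step * grad_f(X), k)
    return X

# Example: minimize f(X) = ||X - A||_F^2 / 2 over rank-<=2 matrices,
# where A itself has rank 2, so the iterates converge to A.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 5))  # rank-2 target
X = projected_gradient(lambda X: X - A, np.zeros((6, 5)), k=2)
```

    Note that the limit here lies in $\mathcal{M}_k$ itself, matching the paper's observation that critical points are either of full rank $k$ or stationary for $f$.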

    Low rank tensor recovery via iterative hard thresholding

    We study extensions of compressive sensing and low rank matrix recovery (matrix completion) to the recovery of low rank tensors of higher order from a small number of linear measurements. While the theoretical understanding of low rank matrix recovery is already well-developed, only a few contributions on the low rank tensor recovery problem are available so far. In this paper, we introduce versions of the iterative hard thresholding algorithm for several tensor decompositions, namely the higher order singular value decomposition (HOSVD), the tensor train format (TT), and the general hierarchical Tucker decomposition (HT). We provide a partial convergence result for these algorithms which is based on a variant of the restricted isometry property of the measurement operator adapted to the tensor decomposition at hand, which induces a corresponding notion of tensor rank. We show that subgaussian measurement ensembles satisfy the tensor restricted isometry property with high probability under a certain almost optimal bound on the number of measurements which depends on the corresponding tensor format. These bounds are extended to partial Fourier maps combined with random sign flips of the tensor entries. Finally, we illustrate the performance of iterative hard thresholding methods for tensor recovery via numerical experiments where we consider recovery from Gaussian random measurements, tensor completion (recovery of missing entries), and Fourier measurements for third order tensors.
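    The structure of iterative hard thresholding can be sketched in the matrix special case (a hedged illustration, not the paper's tensor algorithms: the rank-$k$ SVD truncation below stands in for the HOSVD/TT/HT truncations used for higher-order tensors; all names are hypothetical):

```python
import numpy as np

def iht_low_rank(A_op, A_adj, y, shape, k, step=1.0, iters=500):
    """Sketch of iterative hard thresholding: gradient step on
    ||A(X) - y||^2, then hard thresholding to rank k via truncated SVD."""
    X = np.zeros(shape)
    for _ in range(iters):
        X = X + step * A_adj(y - A_op(X))           # gradient step
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :k] * s[:k]) @ Vt[:k]             # hard thresholding
    return X

# Subgaussian (Gaussian) measurement ensemble: y_i = <M_i, X*>
rng = np.random.default_rng(1)
m, n, k, meas = 8, 8, 1, 150
Xstar = rng.standard_normal((m, 1)) @ rng.standard_normal((1, n))  # rank-1 target
Ms = rng.standard_normal((meas, m, n)) / np.sqrt(meas)
A_op = lambda X: np.tensordot(Ms, X, axes=([1, 2], [0, 1]))
A_adj = lambda r: np.tensordot(r, Ms, axes=(0, 0))
Xhat = iht_low_rank(A_op, A_adj, A_op(Xstar), (m, n), k)
```

    With this oversampling the restricted isometry heuristic applies and the iteration recovers the rank-1 target; the tensor variants differ only in the truncation step.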

    Adaptive stochastic Galerkin FEM for lognormal coefficients in hierarchical tensor representations

    Stochastic Galerkin methods for non-affine coefficient representations are known to cause major difficulties from theoretical and numerical points of view. In this work, an adaptive Galerkin FE method for linear parametric PDEs with lognormal coefficients discretized in Hermite chaos polynomials is derived. It employs problem-adapted function spaces to ensure solvability of the variational formulation. The inherently high computational complexity of the parametric operator is made tractable by using hierarchical tensor representations. For this, a new tensor train format of the lognormal coefficient is derived and verified numerically. The central novelty is the derivation of a reliable residual-based a posteriori error estimator. This can be regarded as a unique feature of stochastic Galerkin methods. It allows for an adaptive algorithm to steer the refinements of the physical mesh and the anisotropic Wiener chaos polynomial degrees. For the evaluation of the error estimator to become feasible, a numerically efficient tensor format discretization is developed. Benchmark examples with unbounded lognormal coefficient fields illustrate the performance of the proposed Galerkin discretization and the fully adaptive algorithm.
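    The tensor train format underlying such hierarchical representations can be illustrated by the standard TT-SVD construction (a generic sketch, not the paper's lognormal coefficient representation; helper names are hypothetical):

```python
import numpy as np

def tt_svd(T, eps=1e-12):
    """Sketch of the standard TT-SVD: split a full tensor into tensor
    train cores by sequential truncated SVDs of its unfoldings."""
    dims = T.shape
    cores, r_prev, M = [], 1, T
    for d in dims[:-1]:
        M = M.reshape(r_prev * d, -1)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))     # truncation rank
        cores.append(U[:, :r].reshape(r_prev, d, r))
        M = s[:r, None] * Vt[:r]
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into a full tensor."""
    X = cores[0]
    for C in cores[1:]:
        X = np.tensordot(X, C, axes=(-1, 0))
    return X.reshape([c.shape[1] for c in cores])

rng = np.random.default_rng(2)
T = rng.standard_normal((3, 4, 5))
cores = tt_svd(T)
```

    For low TT ranks the cores store far fewer entries than the full tensor, which is what makes the high-dimensional parametric operator tractable.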

    A Note on Multilevel Based Error Estimation

    By employing the infinite multilevel representation of the residual, we derive computable bounds to estimate the distance of finite element approximations to the solution of the Poisson equation. If the finite element approximation is a Galerkin solution, the derived error estimator coincides with the standard element and edge based estimator. If Galerkin orthogonality is not satisfied, then the discrete residual additionally appears in terms of the BPX preconditioner. As a by-product of the present analysis, conditions are derived such that the hierarchical error estimation is reliable and efficient.
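    The standard element and edge based estimator referred to above can be sketched for the 1D Poisson problem $-u'' = f$ with P1 elements (a textbook illustration with hypothetical helper names, not the paper's multilevel construction):

```python
import numpy as np

def residual_estimator_1d(nodes, uh, f):
    """Illustrative residual estimator for -u'' = f with P1 elements:
    eta^2 = sum_T h_T^2 ||f||_{L2(T)}^2 + edge jumps of the flux u_h'.
    (On each element u_h'' = 0, so the element residual is f itself.)"""
    h = np.diff(nodes)
    slopes = np.diff(uh) / h                   # piecewise-constant u_h'
    mid = 0.5 * (nodes[:-1] + nodes[1:])
    elem = h**3 * f(mid)**2                    # h_T^2 * midpoint rule for ||f||^2
    jumps = np.diff(slopes)                    # flux jumps at interior nodes
    edge = 0.5 * (h[:-1] + h[1:]) * jumps**2
    eta2 = elem.copy()
    eta2[:-1] += 0.5 * edge                    # split each jump between
    eta2[1:] += 0.5 * edge                     # its two neighbouring elements
    return float(np.sqrt(eta2.sum()))

# Example: f = 1 with exact solution u(x) = x(1-x)/2; use its nodal interpolant.
f = lambda x: np.ones_like(x)
nodes = np.linspace(0.0, 1.0, 11)
eta = residual_estimator_1d(nodes, nodes * (1 - nodes) / 2, f)
```

    For this smooth example the estimator decreases linearly in the mesh size, consistent with reliability and efficiency of the standard estimator.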